Self-paced data augmentation for training neural networks

Authors

Abstract

Data augmentation is widely used in machine learning; however, an effective method for applying data augmentation has not been established, even though it involves several factors that must be tuned carefully. One such factor is sample suitability, which involves selecting the samples that are suitable for augmentation. A typical approach applies augmentation to all training samples and thus disregards sample suitability, which may reduce classifier performance. To address this problem, we propose self-paced augmentation (SPA), which automatically and dynamically selects suitable samples for data augmentation when training a neural network. The proposed method mitigates the deterioration of generalization performance caused by ineffective augmentation. We discuss two reasons why SPA works: its relation to curriculum learning and desirable changes in loss function instability. Experimental results demonstrate that SPA can improve classification performance, particularly when the number of training samples is small. In addition, SPA outperforms the state-of-the-art RandAugment method.
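To make the selection idea concrete, here is a minimal sketch of a self-paced selection rule, assuming (as in the spirit of self-paced learning) that samples with low current loss are considered suitable for augmentation; the paper's exact criterion and threshold schedule may differ, and `spa_select` and its `threshold` parameter are illustrative names, not the authors' API.

```python
import numpy as np

def spa_select(losses, threshold):
    """Sketch of a self-paced selection rule: mark a sample as suitable
    for augmentation when its current per-sample loss falls below a
    threshold. The concrete criterion in the paper may differ."""
    losses = np.asarray(losses, dtype=float)
    return losses < threshold  # boolean mask over the mini-batch

# Hypothetical usage with per-sample losses from the current epoch:
mask = spa_select([0.2, 1.5, 0.4, 3.0], threshold=1.0)
# samples 0 and 2 would be augmented; the threshold can be raised
# over training so that more samples gradually become eligible
```

Raising the threshold over epochs gives the "dynamic" behavior the abstract describes: early training augments only the easiest samples, and harder ones are included as the network matures.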


Similar articles

Self-paced Convolutional Neural Networks

Convolutional neural networks (CNNs) have achieved breakthrough performance in many pattern recognition tasks. In order to distinguish reliable data from noisy and confusing data, we improve CNNs with self-paced learning (SPL) to enhance their learning robustness. In the proposed self-paced convolutional network (SPCN), each sample is assigned a weight to reflect the easines...
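The per-sample weighting described above can be sketched with the standard hard self-paced weighting scheme (weight 1 for samples whose loss is below a pace parameter, 0 otherwise); this is the classic SPL formulation, not necessarily SPCN's exact scheme, and the names below are illustrative.

```python
import numpy as np

def spl_weighted_loss(losses, lam):
    """Hard self-paced weighting (classic SPL formulation): weight 1 for
    'easy' samples with loss below the pace parameter lam, 0 otherwise.
    Returns the weights and the weighted training objective."""
    losses = np.asarray(losses, dtype=float)
    v = (losses < lam).astype(float)          # binary sample weights
    return v, float((v * losses).sum())       # easy samples dominate early

# Hypothetical usage: the hard sample (loss 2.0) is dropped for now
v, obj = spl_weighted_loss([0.1, 2.0, 0.5], lam=1.0)
```

Increasing `lam` across epochs gradually admits harder samples, which is the "self-paced" curriculum that the snippet above refers to.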


Self-Paced Co-training

Notation and Definition: We assume that examples are drawn from some distribution D over an instance space X = X1 × X2, where X1 and X2 correspond to two different "views" of the examples. Let c denote the target function, and let X+ and X− (for simplicity we assume we are doing binary classification) denote the positive and negative regions of X, respectively. For i ∈ {1, 2}, let X i = {xj ∈ Xi : ...


ScreenerNet: Learning Self-Paced Curriculum for Deep Neural Networks

We propose to learn a curriculum, or a syllabus, for supervised learning with deep neural networks. Specifically, we learn a weight for each training sample using a neural network, called ScreenerNet, that is attached to the original network, and we train the two jointly in an end-to-end fashion. We show that networks augmented with our ScreenerNet achieve earlier convergence with better accuracy than the state-of...


Image Augmentation using Radial Transform for Training Deep Neural Networks

Deep learning models have a large number of free parameters that must be estimated by efficient training of the models on a large number of training data samples to increase their generalization performance. In real-world applications, the data available to train these networks is often limited or imbalanced. We propose a sampling method based on the radial transform in a polar coordinate syste...


Virtual Adversarial Training and Data Augmentation for Acoustic Event Detection with Gated Recurrent Neural Networks

In this paper, we use gated recurrent neural networks (GRNNs) for efficiently detecting environmental events of the IEEE Detection and Classification of Acoustic Scenes and Events challenge (DCASE2016). For this acoustic event detection task, data is limited. Therefore, we propose data augmentation techniques such as on-the-fly shuffling and virtual adversarial training for regularization of the GRNNs. Bot...



Journal

Journal title: Neurocomputing

Year: 2021

ISSN: 0925-2312, 1872-8286

DOI: https://doi.org/10.1016/j.neucom.2021.02.080